An NMF-L2,1-Norm Constraint Method for Characteristic Gene Selection
Authors
Abstract
Recent research has demonstrated that characteristic gene selection based on gene expression data remains faced with considerable challenges. This is primarily because gene expression data are typically high dimensional, negative, non-sparse and noisy. However, existing methods for data analysis are able to cope with only some of these challenges. In this paper, we address all of these challeng...
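For orientation only (the exact NMF-L2,1 objective and its update rules are given in the full paper), here is a minimal NumPy sketch of the L2,1-norm the method builds on: it sums the Euclidean norms of a matrix's rows, so penalizing it drives entire rows toward zero, which is what makes it useful for picking out characteristic genes. The gene-ranking helper and the toy data are illustrative assumptions, not part of the paper.

    import numpy as np

    def l21_norm(M):
        # L2,1-norm: sum of the Euclidean (L2) norms of the rows of M.
        return np.sum(np.sqrt(np.sum(M ** 2, axis=1)))

    def rank_genes_by_row_norm(W, top_k=10):
        # Hypothetical helper: score each gene by the L2 norm of its row in a
        # row-sparse factor matrix W and return the indices of the top_k genes.
        scores = np.linalg.norm(W, axis=1)
        return np.argsort(scores)[::-1][:top_k]

    # Toy, made-up data: rows are genes, columns are samples (non-negative).
    rng = np.random.default_rng(0)
    X = np.abs(rng.normal(size=(100, 20)))
    print(l21_norm(X))
    print(rank_genes_by_row_norm(X, top_k=5))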
Similar Articles
A Compact Formulation for the L21 Mixed-norm Minimization Problem
We present an equivalent, compact reformulation of the ℓ2,1 mixed-norm minimization problem for joint sparse signal reconstruction from multiple measurement vectors (MMVs). The reformulation builds upon a compact parameterization, which models the row-norms of the sparse signal representation as parameters of interest, resulting in a significant reduction of the MMV problem size. Given the sparse...
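For context, the ℓ2,1 mixed-norm minimization referred to above is commonly written as min_X 0.5*||Y - A X||_F^2 + λ*||X||_{2,1}, where the columns of X share a common row support across the measurement vectors. A minimal sketch that only evaluates this objective (the matrices and λ are made up; this is not the paper's compact reformulation):

    import numpy as np

    def l21_norm(X):
        # Sum of the L2 norms of the rows of X (the joint-sparsity penalty).
        return np.sum(np.linalg.norm(X, axis=1))

    def mmv_objective(Y, A, X, lam):
        # Data-fit term over all measurement vectors plus the row-sparsity penalty;
        # a small L2,1 value means few rows of X are active, i.e. a shared support.
        return 0.5 * np.linalg.norm(Y - A @ X, "fro") ** 2 + lam * l21_norm(X)

    rng = np.random.default_rng(1)
    A = rng.normal(size=(30, 60))                             # dictionary
    X = np.zeros((60, 5)); X[:4] = rng.normal(size=(4, 5))    # row-sparse signals
    Y = A @ X + 0.01 * rng.normal(size=(30, 5))               # noisy MMVs
    print(mmv_objective(Y, A, X, lam=0.1))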
Outlier Regularization for Vector Data and L21 Norm Robustness
In many real-world applications, data usually contain outliers. One popular approach is to use the L21-norm function as a robust error/loss function. However, the robustness of the L21-norm function is not well understood so far. In this paper, we propose a new Vector Outlier Regularization (VOR) framework to understand and analyze the robustness of the L21-norm function. Our VOR function defines a data point to be an outlier if it is outside a threshold with re...
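As a rough, generic illustration of the robustness question raised above (not the VOR framework itself): under a squared-Frobenius loss an outlying row contributes its squared residual norm, while under an L2,1-style loss it contributes only the residual norm, so its influence grows linearly rather than quadratically. The residual matrix below is made up.

    import numpy as np

    def squared_loss(R):
        # Squared Frobenius loss: each residual row contributes ||r_i||^2.
        return np.sum(np.linalg.norm(R, axis=1) ** 2)

    def l21_loss(R):
        # L2,1 loss: each residual row contributes ||r_i||, damping outliers.
        return np.sum(np.linalg.norm(R, axis=1))

    # Nine well-fit rows and one outlying row.
    R = np.vstack([np.full((9, 3), 0.1), np.full((1, 3), 10.0)])
    print(squared_loss(R))  # dominated by the single outlier
    print(l21_loss(R))      # the outlier's influence is only linear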
Non-Greedy L21-Norm Maximization for Principal Component Analysis
Principal Component Analysis (PCA) is one of the most important unsupervised methods for handling high-dimensional data. However, due to the high computational complexity of its eigen-decomposition solution, it is hard to apply PCA to large-scale data with high dimensionality. Meanwhile, the squared L2-norm based objective makes it sensitive to data outliers. In recent research, the L1-norm maximi...
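For reference, L21-norm maximization for PCA is usually posed as max over orthonormal W of Σ_i ||W^T x_i||_2, replacing the squared projections of classical PCA with unsquared ones so that outliers weigh in less. A minimal sketch that just evaluates this objective for a random orthonormal W (the data and dimensions are made up; the non-greedy solver itself is not shown):

    import numpy as np

    def l21_pca_objective(X, W):
        # X: (n_samples, n_features); W: (n_features, k) with orthonormal columns.
        # Sum over samples of the unsquared L2 norm of each projected sample.
        return np.sum(np.linalg.norm(X @ W, axis=1))

    rng = np.random.default_rng(2)
    X = rng.normal(size=(50, 5))
    W, _ = np.linalg.qr(rng.normal(size=(5, 2)))  # random orthonormal basis
    print(l21_pca_objective(X, W))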
Robust Multiple Kernel K-means Using L21-Norm
The k-means algorithm is one of the most often used methods for data clustering. However, the standard k-means can only be applied in the original feature space. The kernel k-means, which extends k-means into the kernel space, can be used to capture the non-linear structure and identify arbitrarily shaped clusters. Since both the standard k-means and kernel k-means apply the squared error to mea...
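As background for the kernel k-means extension mentioned above (a generic sketch, not the paper's multiple-kernel L21 method): the squared feature-space distance from a point to a cluster centroid can be computed from kernel entries alone, which is what lets k-means run in the implicit kernel space. The RBF kernel, bandwidth, and cluster membership below are illustrative assumptions.

    import numpy as np

    def kernel_distance_sq(K, i, members):
        # Squared feature-space distance from point i to the centroid of `members`,
        # using only kernel values: k(i,i) - 2*mean_j k(i,j) + mean_{j,l} k(j,l).
        m = np.asarray(members)
        return K[i, i] - 2.0 * K[i, m].mean() + K[np.ix_(m, m)].mean()

    rng = np.random.default_rng(3)
    X = rng.normal(size=(10, 2))
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / 2.0)  # RBF kernel matrix with an assumed bandwidth
    print(kernel_distance_sq(K, 0, [1, 2, 3]))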
Journal
Journal title: PLOS ONE
Year: 2016
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0158494